Uncertainty-aware 3D Object-Level Mapping with Deep Shape Priors
3D object-level mapping is a fundamental problem in robotics, which is
especially challenging when object CAD models are unavailable during inference.
In this work, we propose a framework that can reconstruct high-quality
object-level maps for unknown objects. Our approach takes multiple RGB-D images
as input and outputs dense 3D shapes and 9-DoF poses (including 3 scale
parameters) for detected objects. The core idea of our approach is to leverage
a learnt generative model for shape categories as a prior and to formulate a
probabilistic, uncertainty-aware optimization framework for 3D reconstruction.
We derive a probabilistic formulation that propagates shape and pose
uncertainty through two novel loss functions. Unlike current state-of-the-art
approaches, we explicitly model the uncertainty of the object shapes and poses
during our optimization, resulting in a high-quality object-level mapping
system. Moreover, the resulting shape and pose uncertainties, which we
demonstrate can accurately reflect the true errors of our object maps, can also
be useful for downstream robotics tasks such as active vision. We perform
extensive evaluations on indoor and outdoor real-world datasets, achieving
substantial improvements over state-of-the-art methods. Our code will
be available at https://github.com/TRAILab/UncertainShapePose.
Comment: Manuscript submitted to ICRA 202
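To illustrate the core idea of uncertainty-aware optimization at toy scale, the sketch below jointly fits a 1-D "shape code" and its uncertainty by minimizing a Gaussian negative log-likelihood with plain gradient descent. All names and numbers here are hypothetical; this is not the paper's actual formulation, which operates on learnt shape priors and 9-DoF poses.

```python
import numpy as np

# Toy stand-in for a shape/pose parameter: noisy 1-D observations of a
# latent value. The optimizer estimates both the value z and a
# log-standard-deviation log_sigma, so uncertainty is a model output.
rng = np.random.default_rng(0)
obs = 2.0 + 0.3 * rng.standard_normal(50)

z, log_sigma, lr = 0.0, 0.0, 0.05
for _ in range(500):
    r = z - obs                        # residuals
    sigma2 = np.exp(2 * log_sigma)
    # gradients of sum(r^2 / (2 sigma^2) + log sigma), averaged per sample
    grad_z = np.sum(r / sigma2) / len(obs)
    grad_s = np.sum(-r**2 / sigma2 + 1.0) / len(obs)
    z -= lr * grad_z
    log_sigma -= lr * grad_s
```

After optimization, z approaches the sample mean and exp(log_sigma) approaches the residual spread, i.e. the recovered uncertainty tracks the true error of the estimate, which is the property the abstract claims for the full system.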
POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes
Maintaining an up-to-date map to reflect recent changes in the scene is very
important, particularly in situations involving repeated traversals by a robot
operating in an environment over an extended period. Undetected changes may
cause a deterioration in map quality, leading to poor localization, inefficient
operations, and lost robots. Volumetric methods, such as truncated signed
distance functions (TSDFs), have quickly gained traction due to their real-time
production of a dense and detailed map, though map updating in scenes that
change over time remains a challenge. We propose a framework that introduces a
novel probabilistic object state representation to track object pose changes in
semi-static scenes. The representation jointly models a stationarity score and
a TSDF change measure for each object. A Bayesian update rule that incorporates
both geometric and semantic information is derived to achieve consistent online
map maintenance. To extensively evaluate our approach alongside the
state-of-the-art, we release a novel real-world dataset in a warehouse
environment. We also evaluate on the public ToyCar dataset. Our method
outperforms state-of-the-art methods on the reconstruction quality of
semi-static environments.
Comment: Published in Robotics: Science and Systems (RSS) 202
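A minimal sketch of the kind of Bayesian stationarity update described above, assuming a binary "object is stationary" state and a hypothetical mapping from per-frame TSDF change to observation likelihoods; the paper's actual rule also fuses semantic information and is not reproduced here.

```python
import math

def update_stationarity(p_prior, like_static, like_moved):
    """Posterior probability that an object stayed put, given one new
    observation with likelihoods under the static / moved hypotheses."""
    num = like_static * p_prior
    return num / (num + like_moved * (1.0 - p_prior))

def tsdf_change_likelihoods(delta, scale=0.1):
    # Hypothetical geometric evidence model: a small TSDF change
    # supports "static", a large one supports "moved".
    like_static = math.exp(-delta / scale)
    return like_static, 1.0 - like_static

# Three frames with small observed TSDF change: belief in "stationary"
# should rise monotonically from the 0.5 prior.
p = 0.5
for delta in [0.02, 0.03, 0.01]:
    ls, lm = tsdf_change_likelihoods(delta)
    p = update_stationarity(p, ls, lm)
```

Keeping the state probabilistic (rather than a hard static/moved label) lets the map defer a decision until enough geometric and semantic evidence has accumulated.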
Foot measurements from 2D digital images
Foot measurements play an important role in the design of comfortable footwear. This study proposed a non-invasive and efficient means of obtaining foot measurements from 2D digital foot images. The hardware of the proposed image-based measuring system was easy to set up, and the system was tested on 9 foot measurements with ten male subjects, who were also measured manually. A comparison between the image-based and traditional manual measurements showed no significant differences between the two systems on 8 of the 9 foot measurements. The errors in the image-based measurements were also analyzed and discussed. With further improvements, the proposed image-based system may be applied to online shoe sales, especially for customized shoes.
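The "no significant difference" comparison above is the kind of result a paired t-test on per-subject measurements would give. The sketch below runs one on entirely made-up foot-length readings for ten subjects (the study's data are not public here); the measurement values are illustrative only.

```python
import math

def paired_t(x, y):
    """Paired t statistic for two matched samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical foot-length readings (mm): image-based vs. manual system,
# one pair per subject.
image  = [251.2, 243.8, 260.1, 255.4, 248.9, 262.3, 250.0, 246.7, 258.2, 253.1]
manual = [250.6, 244.5, 259.4, 256.0, 248.1, 261.8, 250.9, 246.0, 257.5, 253.8]

t = paired_t(image, manual)
# two-sided critical value for df = 9 at alpha = 0.05 is about 2.262
print(abs(t) < 2.262)  # True: no significant difference on this measurement
```

With ten subjects the test has nine degrees of freedom, so |t| must exceed roughly 2.262 before a difference between the two systems counts as significant at the 5% level.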
Boreas: A Multi-Season Autonomous Driving Dataset
The Boreas dataset was collected by driving a repeated route over the course
of one year, resulting in stark seasonal variations and adverse weather
conditions such as rain and falling snow. In total, the Boreas dataset contains
over 350 km of driving data featuring a 128-channel Velodyne Alpha-Prime lidar,
a 360-degree Navtech CIR304-H scanning radar, a 5MP FLIR Blackfly S camera, and
centimetre-accurate post-processed ground truth poses. At launch, our dataset
will support live leaderboards for odometry, metric localization, and 3D object
detection. The dataset and development kit are available at:
https://www.boreas.utias.utoronto.ca
Comment: Submitted to IJRR as a data paper